11 research outputs found

    Data Assimilation Fundamentals

    Get PDF
    This open-access textbook's significant contribution is the unified derivation of data-assimilation techniques from a common, optimal starting point, namely Bayes' theorem. Unique to this book is the "top-down" derivation of the assimilation methods: it starts from Bayes' theorem and gradually introduces the assumptions and approximations needed to arrive at today's popular data-assimilation methods. This strategy is the opposite of most textbooks and reviews on data assimilation, which typically take a bottom-up approach to derive a particular assimilation method, e.g., the derivation of the Kalman filter from control theory or the derivation of the ensemble Kalman filter as a low-rank approximation of the standard Kalman filter. The bottom-up approach derives the assimilation methods from different mathematical principles, making them difficult to compare. Thus, it is unclear which assumptions are made to derive an assimilation method and sometimes even which problem it aspires to solve. The book's top-down approach allows categorizing data-assimilation methods by the approximations used, enabling the user to choose the most suitable method for a particular problem or application. Have you ever wondered about the difference between the ensemble 4DVar and the ensemble randomized maximum likelihood (EnRML) methods? Do you know the differences between the ensemble smoother and the ensemble Kalman smoother? Would you like to understand how a particle flow is related to a particle filter? This book provides clear answers to several such questions. It provides the basis for an advanced course in data assimilation, focuses on the unified derivation of the methods, and illustrates their properties with multiple examples. It is suitable for graduate students, post-docs, scientists, and practitioners working in data assimilation.
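    The top-down route from Bayes' theorem to a familiar assimilation update can be illustrated in the scalar Gaussian case, where the posterior from Bayes' theorem coincides with the Kalman-filter analysis. A minimal sketch (not taken from the book; all variable names are illustrative):

```python
import numpy as np

# Scalar Gaussian instance of Bayes' theorem reducing to the Kalman update.
# Prior: x ~ N(xb, B); likelihood: y | x ~ N(x, R) (observation operator H = 1).
xb, B = 1.0, 2.0   # prior (background) mean and variance
y, R = 3.0, 1.0    # observation and observation-error variance

# Posterior from completing the square in the product of the two Gaussians:
#   P = (1/B + 1/R)^-1,  xa = P * (xb/B + y/R)
P = 1.0 / (1.0 / B + 1.0 / R)
xa = P * (xb / B + y / R)

# Equivalent Kalman-filter form with gain K = B / (B + R):
K = B / (B + R)
xa_kf = xb + K * (y - xb)
P_kf = (1.0 - K) * B

assert np.isclose(xa, xa_kf) and np.isclose(P, P_kf)
print(xa, P)  # posterior mean 7/3 and variance 2/3
```

    The assumptions introduced along the way (Gaussian prior, Gaussian observation errors, linear observation operator) are exactly the kind of approximations the book's top-down derivation makes explicit.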

    Optimizing the use of InSAR observations in data assimilation problems to estimate reservoir compaction

    Get PDF
    Hydrocarbon production may cause subsidence as a result of the pressure reduction in the gas-producing layer and the ensuing reservoir compaction. To analyze the subsidence process and estimate reservoir parameters, we use a particle method to assimilate interferometric synthetic-aperture radar (InSAR) observations of surface deformation with a conceptual reservoir model. As an example, we use an analytical model of the Groningen gas reservoir based on a geometry representing the compartmentalized structure of the subsurface at reservoir depth. The efficacy of the particle method decreases when the number of degrees of freedom is large compared to the ensemble size. This number of degrees of freedom, in turn, varies with the spatial correlation in the observed field. The resolution of the InSAR data and the number of observations therefore affect the performance of the particle method. In this study, we quantify the information in a Sentinel-1 SAR dataset using the concept of Shannon entropy from information theory. We investigate how best to capture the level of detail in the model resolved by the InSAR data while maximizing their information content for data assimilation. We show that an incorrect representation of the existing correlations leads to weight collapse when the number of observations increases, unless the ensemble size grows. However, simulations of mutual information show that data reduction can be optimized by choosing an adequate mesh given the spatial correlation in the observed subsidence. Our approach provides a means to make better use of the information in available InSAR data, reducing weight collapse without additional computational cost.
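    The weight-collapse mechanism can be demonstrated with a toy importance-sampling calculation: as the number of independent observations grows, the effective ensemble size 1/Σw² shrinks toward one. A sketch under simplified assumptions (Gaussian particles and observations; not the paper's code or model):

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_sample_size(log_w):
    """Effective ensemble size N_eff = 1 / sum(w_i^2) for normalized weights,
    computed from log-weights for numerical stability."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

N = 128                      # ensemble size, matching the study above
truth = 0.0
for m in (10, 100, 1000):    # increasing number of independent observations
    particles = rng.normal(0.0, 1.0, N)
    obs = truth + rng.normal(0.0, 1.0, m)
    # Log-likelihood of each particle given m independent Gaussian observations
    log_w = np.array([-0.5 * np.sum((obs - p) ** 2) for p in particles])
    print(m, effective_sample_size(log_w))   # N_eff shrinks as m grows
```

    Spatially correlated observations carry fewer effective degrees of freedom than independent ones, which is why exploiting the correlation structure (as the paper does via mutual information) mitigates the collapse.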

    An international initiative of predicting the SARS-CoV-2 pandemic using ensemble data assimilation

    Get PDF
    This work demonstrates the efficiency of using iterative ensemble smoothers to estimate the parameters of an SEIR model. We have extended a standard SEIR model with age classes and compartments for sick, hospitalized, and dead. The data conditioned on are the daily numbers of accumulated deaths and of hospitalized; it is also possible to condition the model on the number of cases obtained from testing. We start from a wide prior distribution for the model parameters; the ensemble conditioning then leads to a posterior ensemble of estimated parameters yielding model predictions in close agreement with the observations. The updated ensemble of model simulations has predictive capability and includes uncertainty estimates. In particular, we estimate the effective reproductive number as a function of time, and we can assess the impact of different intervention measures. By starting from the updated set of model parameters, we can make accurate short-term predictions of the epidemic's development, assuming knowledge of the future effective reproductive number. The model system also allows for the computation of long-term scenarios of the epidemic under different assumptions. We have applied the model system to data sets from several countries: the four European countries Norway, England, the Netherlands, and France; the province of Quebec in Canada; the South American countries Argentina and Brazil; and the four US states Alabama, North Carolina, California, and New York. These countries and states all show vastly different developments of the epidemic, and we could accurately model the SARS-CoV-2 outbreak in all of them. We recognize that more complex models, e.g., with regional compartments, may be desirable, and we suggest that the approach used here should also be applicable to such models.
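    The forward model underlying such a study is the classic SEIR compartment system. A minimal sketch of its basic form, without the paper's age classes or hospital compartments (parameter values are illustrative, not the paper's estimates):

```python
import numpy as np

def seir_step(S, E, I, R, beta, sigma, gamma, N, dt=1.0):
    """One explicit Euler step of a basic SEIR model.
    beta: transmission rate, sigma: 1/incubation period, gamma: recovery rate."""
    new_exposed    = beta * S * I / N * dt
    new_infectious = sigma * E * dt
    new_recovered  = gamma * I * dt
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

N = 1_000_000
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
# Uncertain parameters: in the study, an iterative ensemble smoother would
# condition a prior ensemble of such values on death and hospitalization data.
beta, sigma, gamma = 0.3, 1 / 5.5, 1 / 10

for _ in range(100):                       # integrate 100 days
    S, E, I, R = seir_step(S, E, I, R, beta, sigma, gamma, N)

R0 = beta / gamma                          # basic reproduction number
print(R0, I)                               # R0 = 3.0 under these assumptions
```

    In the ensemble setting, each member carries its own draw of (beta, sigma, gamma, ...), and the smoother reweights or updates the ensemble so the simulated death and hospitalization curves match the observed ones.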

    Parameter estimation using a particle method: inferring mixing coefficients from sea level observations

    No full text
    This paper presents a first attempt to estimate mixing parameters from sea level observations using a particle method based on importance sampling. The method is applied to a 128-member ensemble of simulations with a global ocean general circulation model of high complexity. Idealized twin experiments demonstrate that the method can accurately reconstruct mixing parameters from an observed mean sea level field when mixing is assumed to be spatially homogeneous. An experiment with inhomogeneous eddy coefficients fails because of the limited ensemble size. This is overcome by the introduction of local weighting, which captures spatial variations in mixing qualitatively. As the sensitivity of sea level to variations in mixing is higher for low values of the mixing coefficients, the method works relatively well in regions of low eddy activity.
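    The importance-sampling idea behind such a twin experiment can be sketched with a scalar parameter and a toy forward model (the real study runs an ocean GCM; the linear model and numbers below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(k):
    """Toy stand-in for the sea-level response to a mixing coefficient k."""
    return 2.0 * k

# Twin experiment: generate a synthetic observation from a known truth
k_true = 1.5
obs_err = 0.1
obs = forward(k_true) + rng.normal(0.0, obs_err)

# Prior ensemble of parameter values (the importance-sampling proposal)
ensemble = rng.uniform(0.5, 2.5, 128)

# Importance weights from the Gaussian observation likelihood
log_w = -0.5 * ((obs - forward(ensemble)) / obs_err) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

k_est = np.sum(w * ensemble)   # weighted posterior-mean estimate
print(k_est)                   # close to k_true = 1.5
```

    Local weighting, as used in the paper for inhomogeneous coefficients, amounts to computing such weights separately per region from the observations that inform that region.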

    Balanced Ocean-Data Assimilation near the Equator

    No full text
    We address the question of whether using unbalanced updates in ocean-data assimilation schemes for seasonal forecasting systems can result in a relatively poor simulation of zonal currents. An assimilation scheme in which temperature observations are used to update only the density field is compared to a scheme in which updates of the density field and the zonal velocities are related by geostrophic balance. This is done for an equatorial linear shallow-water model. We find that equatorial zonal velocities can deteriorate if velocity is not updated in the assimilation procedure. Adding balanced updates to the zonal velocity is shown to be a simple remedy for the shallow-water model. Next, optimal interpolation (OI) schemes with balanced updates of the zonal velocity are implemented in two ocean general circulation models. First tests indicate a beneficial impact on equatorial upper-ocean zonal currents.
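    A balanced zonal-velocity update of this kind can be sketched on an equatorial beta-plane: away from the equator the geostrophic relation f u = -g dη/dy applies, while at the equator (f = 0) the standard remedy is the second-derivative form u = -(g/β) d²η/dy². The sketch below is a generic illustration of that balance, not the paper's actual OI scheme; the Gaussian height increment and the 200 km blending threshold are assumptions:

```python
import numpy as np

g = 9.81            # gravity (m s^-2)
beta = 2.3e-11      # df/dy at the equator (m^-1 s^-1)

# Meridional grid and an idealized sea-surface-height increment (m)
y = np.linspace(-2e6, 2e6, 401)          # +-2000 km around the equator
deta = 0.1 * np.exp(-(y / 5e5) ** 2)     # Gaussian height bump

deta_y = np.gradient(deta, y)            # d(eta)/dy
deta_yy = np.gradient(deta_y, y)         # d2(eta)/dy2

f = beta * y
# Geostrophic u away from the equator; beta-plane form within |y| < 200 km
# (the nan guard avoids dividing by f = 0; that branch is discarded there).
u = np.where(np.abs(y) > 2e5,
             -g * deta_y / np.where(f == 0, np.nan, f),
             -g * deta_yy / beta)
print(u[len(y) // 2])   # balanced zonal-velocity increment at the equator
```

    Updating only the density (height) field and leaving u unchanged is equivalent to dropping this balanced increment, which is the unbalanced scheme the paper finds to degrade equatorial currents.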

    Uncertainties in the mean ocean dynamic topography before the launch of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE)

    No full text
    In anticipation of the future observations of the gravity mission Gravity Field and Steady-State Ocean Circulation Explorer (GOCE), the present-day accuracy of the mean dynamic topography (MDT) is estimated from both observations and models. A comparison of five observational estimates illustrates that RMS differences in MDT vary from 4.2 to 10.5 cm after low-pass filtering the fields with a Hamming window with wavenumber N = 120 (corresponding to an effective resolution of 167 km). RMS differences in observational MDT reduce to 2.4–8.3 cm for N = 15 (1334 km). Differences in data sources (geoid model, in situ data) are mostly visible in the small-scale oceanic features, while differences in processing (filtering, inverse modeling techniques) are reflected at larger scales. A comparison of seven numerical ocean models demonstrates that model estimates differ mostly in the western boundary currents and in the Antarctic Circumpolar Current. RMS differences between modeled and observed MDT are at best 8.8 cm for N = 120, reducing to 6.4 cm for N = 15. For models with data assimilation, a minimal RMS difference of 6.6 cm (N = 120) to 3.4 cm (N = 15) is obtained with respect to the observational MDTs. The reduction of differences between MDTs with increasing filtering scale is smaller than expected. While GOCE is expected to improve MDT estimates at small spatial scales, improvement of mean sea surface estimates from satellite altimetry may be needed to improve MDT estimates at larger scales.
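    The basic metric in such comparisons is an area-weighted RMS difference between two MDT fields on a latitude-longitude grid. A minimal sketch (the spherical-harmonic Hamming-window filtering applied in the study is omitted; the toy fields are illustrative):

```python
import numpy as np

def rms_difference(mdt_a, mdt_b, lat):
    """Area-weighted RMS difference between two fields, in the fields' units.

    mdt_a, mdt_b: 2-D fields on a regular lat-lon grid, shape (nlat, nlon).
    lat: 1-D latitudes in degrees; cos(lat) approximates relative cell area.
    """
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(mdt_a)
    diff2 = (mdt_a - mdt_b) ** 2
    return np.sqrt(np.sum(w * diff2) / np.sum(w))

# Toy check: two fields differing by a constant 5 cm everywhere
lat = np.linspace(-80.0, 80.0, 161)
a = np.zeros((161, 360))
b = a + 0.05
print(rms_difference(a, b, lat))   # 0.05 m, i.e. 5 cm
```

    In the study, this kind of statistic is evaluated after low-pass filtering both fields to a common resolution, so that RMS differences at N = 120 and N = 15 isolate the small- and large-scale disagreements respectively.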